On Lower Complexity Bounds for Large-Scale Smooth Convex Optimization

Authors

  • Cristobal Guzman
  • Arkadi Nemirovski
Abstract

We derive lower bounds on the black-box oracle complexity of large-scale smooth convex minimization problems, with emphasis on minimizing smooth convex functions (those whose gradient is Hölder continuous with a given exponent and constant) over high-dimensional ‖ · ‖p-balls, 1 ≤ p ≤ ∞. Our bounds turn out to be tight up to factors logarithmic in the design dimension, and can be viewed as a substantial extension of the existing lower complexity bounds for large-scale convex minimization, which cover the nonsmooth case and the “Euclidean” smooth case (minimization of convex functions with Lipschitz continuous gradients over Euclidean balls). As a byproduct of our results, we demonstrate that the classical Conditional Gradient algorithm is near-optimal, in the sense of Information-Based Complexity Theory, when minimizing smooth convex functions over high-dimensional ‖ · ‖∞-balls and their matrix analogues, spectral norm balls in the spaces of square matrices.
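The near-optimality claim concerns the classical Conditional Gradient (Frank-Wolfe) method, whose only per-iteration work over an ‖ · ‖∞-ball is a coordinate-wise sign computation. Below is a minimal illustrative sketch; the quadratic objective, radius, and step-size rule are assumptions for the example, not taken from the paper.

```python
import numpy as np

def conditional_gradient_linf(grad, x0, radius, iters=200):
    """Frank-Wolfe over the l_inf ball {x : ||x||_inf <= radius}.

    The linear minimization oracle over this ball has the closed form
    s = -radius * sign(g), computed coordinate-wise.
    """
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        s = -radius * np.sign(g)          # LMO over the l_inf ball
        gamma = 2.0 / (t + 2.0)           # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Usage on an illustrative smooth convex quadratic f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = rng.standard_normal(50)
grad = lambda x: A.T @ (A @ x - b)
x_approx = conditional_gradient_linf(grad, np.zeros(100), radius=1.0)
```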


Similar articles

The Complexity of Large-scale Convex Programming under a Linear Optimization Oracle

This paper considers a general class of iterative optimization algorithms, referred to as linear-optimization-based convex programming (LCP) methods, for solving large-scale convex programming (CP) problems. The LCP methods, covering the classic conditional gradient (CndG) method (a.k.a., Frank-Wolfe method) as a special case, can only solve a linear optimization subproblem at each iteration. In...
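For illustration, the linear optimization subproblem has a closed-form solution for many common feasible sets. The two oracles below are standard textbook examples, assumed here for the sketch rather than taken from the cited paper.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over the l_1 ball:
    argmin_{||s||_1 <= radius} <g, s> is attained at a signed vertex."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g, dtype=float)
    s[i] = -radius * np.sign(g[i])
    return s

def lmo_simplex(g):
    """LMO over the probability simplex: the vertex e_i minimizing g_i."""
    s = np.zeros_like(g, dtype=float)
    s[np.argmin(g)] = 1.0
    return s
```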


Oracle Complexity of Second-Order Methods for Smooth Convex Optimization

Second-order methods, which utilize gradients as well as Hessians to optimize a given function, are of major importance in mathematical optimization. In this work, we study the oracle complexity of such methods, or equivalently, the number of iterations required to optimize a function to a given accuracy. Focusing on smooth and convex functions, we derive (to the best of our knowledge) the firs...
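As context for what a single second-order oracle call buys, here is a minimal sketch of one damped Newton step; the damping parameter and the oracle interface are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def newton_step(grad, hess, x, damping=1.0):
    """One damped Newton step: x+ = x - damping * H(x)^{-1} g(x).

    Each step consumes one gradient-oracle call and one Hessian-oracle
    call, which is exactly what second-order oracle complexity counts.
    """
    g, H = grad(x), hess(x)
    return x - damping * np.linalg.solve(H, g)
```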


On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization

The problem of stochastic convex optimization with bandit feedback (in the learning community) or without knowledge of gradients (in the optimization community) has received much attention in recent years, in the form of algorithms and performance upper bounds. However, much less is known about the inherent complexity of these problems, and there are few lower bounds in the literature, especial...
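A standard construction in this derivative-free/bandit literature is the two-point gradient estimator, sketched below assuming noiseless function evaluations; it is one representative technique, not necessarily the estimator analyzed in the cited paper.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-3, rng=None):
    """Zeroth-order gradient estimate built from two function values:
    (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u,
    with u drawn uniformly from the unit sphere in R^d."""
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                 # uniform direction on the sphere
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
```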


Tight Complexity Bounds for Optimizing Composite Objectives

We provide tight upper and lower bounds on the complexity of minimizing the average of m convex functions using gradient and prox oracles of the component functions. We show a significant gap between the complexity of deterministic vs randomized optimization. For smooth functions, we show that accelerated gradient descent (AGD) and an accelerated variant of SVRG are optimal in the deterministic...
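For reference, here is a minimal sketch of plain (non-accelerated) SVRG for minimizing the average of m components; the step size, epoch length, and restart rule are illustrative assumptions, and the accelerated variant shown optimal in the cited paper differs from this baseline.

```python
import numpy as np

def svrg(grads, x0, eta=0.01, epochs=10, inner=None, rng=None):
    """Plain SVRG for min_x (1/m) * sum_i f_i(x).

    grads: list of per-component gradient oracles, grads[i](x) = grad f_i(x).
    """
    rng = rng or np.random.default_rng()
    m = len(grads)
    inner = inner or 2 * m
    x = x0.copy()
    for _ in range(epochs):
        x_ref = x.copy()
        full_grad = sum(g(x_ref) for g in grads) / m       # snapshot gradient
        for _ in range(inner):
            i = rng.integers(m)                            # sample one component
            v = grads[i](x) - grads[i](x_ref) + full_grad  # variance-reduced step
            x = x - eta * v
        # next epoch restarts from the last inner iterate
    return x
```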



Journal:
  • J. Complexity

Volume: 31, Issue: -

Pages: -

Publication date: 2015